ℓ1-Penalized Linear Mixed-Effects Models for BCI

Authors

  • Siamac Fazli
  • Márton Danóczy
  • Jürg Schelldorfer
  • Klaus-Robert Müller
Abstract

A recently proposed statistical model estimates population effects and individual variability between subgroups simultaneously by extending Lasso methods. We apply this ℓ1-penalized linear mixed-effects regression model to a large-scale real-world problem: by exploiting a large set of brain-computer interface (BCI) data, we obtain a subject-independent classifier that compares favorably with prior zero-training algorithms. This unifying model inherently compensates for shifts in the input space attributable to the individuality of a subject. In particular, we are now able to differentiate within-subject and between-subject variability, gaining a deeper understanding of both the underlying statistical and physiological structure of the data.
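The within-subject versus between-subject distinction mentioned above can be illustrated with a minimal variance decomposition based on the law of total variance. This is only an illustrative sketch, not the paper's estimator; the function and variable names are ours:

```python
import numpy as np

def variance_decomposition(values, subjects):
    """Split the total variance of per-trial values into a
    between-subject and a within-subject component."""
    values = np.asarray(values, dtype=float)
    subjects = np.asarray(subjects)
    grand_mean = values.mean()
    n = len(values)
    between = 0.0
    within = 0.0
    for s in np.unique(subjects):
        v = values[subjects == s]
        # variance of subject means around the grand mean
        between += len(v) * (v.mean() - grand_mean) ** 2
        # variance of trials around their own subject's mean
        within += ((v - v.mean()) ** 2).sum()
    return between / n, within / n
```

By construction, the two components sum to the total (biased) sample variance, which makes the decomposition easy to sanity-check on toy data.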


Related articles

ℓ1-penalized linear mixed-effects models for high dimensional data with application to BCI

Recently, a novel statistical model has been proposed to estimate population effects and individual variability between subgroups simultaneously by extending Lasso methods. We apply, for the first time, this so-called ℓ1-penalized linear mixed-effects regression model to a large-scale real-world problem: we study a large set of brain-computer interface data and, through the novel estimato...

Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression, leading to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including ...

Penalized Bregman Divergence Estimation via Coordinate Descent

Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. More recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear and logistic regression and was shown to gain computational superiority. This paper explores...
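The coordinate descent scheme for the Lasso referred to above cycles through the coefficients, applying a soft-thresholding update to each one while holding the rest fixed. A minimal sketch under the standard objective (1/(2n))·||y − Xb||² + λ·||b||₁ follows; the function names are ours:

```python
import numpy as np

def soft_threshold(z, t):
    # S(z, t) = sign(z) * max(|z| - t, 0)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for
    (1/(2n)) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-feature curvature x_j'x_j / n
    r = y - X @ b                      # running residual
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = X[:, j] @ r / n + col_sq[j] * b[j]
            b_new = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (b[j] - b_new)  # keep residual in sync
            b[j] = b_new
    return b
```

For λ at or above max_j |x_jᵀy|/n, every coefficient is thresholded to zero, which matches the well-known point at which the Lasso path becomes empty.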

A Generic Path Algorithm for Regularized Statistical Estimation.

Regularization is widely used in statistics and machine learning to prevent overfitting and to gear solutions toward prior information. In general, a regularized estimation problem minimizes the sum of a loss function and a penalty term. The penalty term is usually weighted by a tuning parameter and encourages certain constraints on the parameters to be estimated. Particular choices of constraints...

Penalized least squares versus generalized least squares representations of linear mixed models

The methods in the lme4 package for R for fitting linear mixed models are based on sparse matrix methods, especially the Cholesky decomposition of sparse positive-semidefinite matrices, in a penalized least squares representation of the conditional model for the response given the random effects. The representation is similar to that in Henderson’s mixed-model equations. An alternative represen...
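The penalized least squares representation described above can be sketched in a heavily simplified, scalar-precision form: fixed effects β and random effects u are solved jointly from a Henderson-style augmented system in which only the random-effects block is penalized. This is an illustration of the idea, not the lme4 sparse-Cholesky implementation; the names and the single penalty parameter `lam` are our simplifications:

```python
import numpy as np

def pls_random_effects(y, X, Z, lam=1.0):
    """Jointly solve for fixed effects beta and random effects u
    minimizing ||y - X beta - Z u||^2 + lam * ||u||^2."""
    n, p = X.shape
    q = Z.shape[1]
    A = np.hstack([X, Z])
    # ridge-style penalty restricted to the random-effects block
    P = np.zeros((p + q, p + q))
    P[p:, p:] = lam * np.eye(q)
    theta = np.linalg.solve(A.T @ A + P, A.T @ y)
    return theta[:p], theta[p:]
```

As the penalty grows, the random effects are shrunk toward zero and β approaches the ordinary least squares fit on X alone, which is the expected limiting behavior of the mixed-model equations.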


Publication year: 2011